ABAQUS INP COMPREHENSIVE ANALYZER
Under the Hood — A Deep-Dive Series
PART 3
Material Properties, the Plotting Engine,
and the Recommendation System
From Property Tables to Diagnostics and Best-Practice Guidance
Joseph P. McFadden Sr.
McFaddenCAE.com | The Holistic Analyst
© 2026 Joseph P. McFadden Sr. All rights reserved.
Setup — What Material Data Actually Is
Parts One and Two of this series covered the structural side of the model: reading the file, identifying parts, extracting geometry, rendering surfaces. All of that is topology — the shape of things and how they connect.
Part Three is about what those parts are made of. Material data is where the physics lives. It is the bridge between the geometric model and the physical behavior you are trying to predict. Get it wrong — use the wrong modulus, misrepresent temperature dependence, omit failure criteria — and your simulation produces numbers that have no connection to reality, no matter how clean the mesh is.
This part of the program — the Materials tab, the Property Viewer tab, the Plotting tab, the Recommendations tab, and the Edit Proposals system — is where that data is made visible, auditable, and actionable. We will walk through each piece in turn.
Section 1 — How Material Data Is Stored in Memory
By the time the interface loads, the material data has already been extracted — during the second parsing pass covered in Part One. What we have in memory is structured as a nested dictionary.
The top level is keyed by material name — the name as it appears after the equals sign in *MATERIAL. Each material name maps to a list of property records. Each property record is itself a dictionary with three fields: the keyword name, the keyword parameters, and the data.
The keyword name is the Abaqus property keyword: ELASTIC, DENSITY, EXPANSION, CONDUCTIVITY, SPECIFIC HEAT, PLASTIC, DAMPING. The parameters are the options specified on that keyword line — for example, MODULI=LONG TERM on a viscoelastic definition. The data is a list of rows, where each row is a list of floating-point numbers.
To make this concrete: a steel material with temperature-dependent elastic properties might have an ELASTIC record with twelve data rows. Each row has three values: Young's modulus E, Poisson's ratio nu, and temperature T. A material with a simple constant density has a DENSITY record with one row containing a single value.
A more complex material — a polymer used in a drop test simulation — might have ELASTIC data, DENSITY data, EXPANSION data for thermal strain calculation, and PLASTIC data with a yield curve defined at multiple temperatures. All of these appear as separate property records in the list for that material name.
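A minimal sketch of that in-memory layout follows. The name mat_props comes from the source; the three fields of a property record (keyword, parameters, data) are confirmed by the text, but the exact dictionary keys used here ("keyword", "params", "data") are illustrative:

```python
# Sketch of the nested dictionary described above. The record field
# names ("keyword", "params", "data") are illustrative, not the
# program's exact keys.
mat_props = {
    "STEEL": [
        {
            "keyword": "ELASTIC",
            "params": {},                  # options from the keyword line
            "data": [                      # one list per data row
                [207000.0, 0.30, 20.0],    # E, nu, T
                [195000.0, 0.30, 200.0],
            ],
        },
        {"keyword": "DENSITY", "params": {}, "data": [[7.85e-9]]},
    ],
}

# Questions like "which materials define PLASTIC data?" become simple
# comprehensions over this structure:
plastic_mats = [name for name, recs in mat_props.items()
                if any(r["keyword"] == "PLASTIC" for r in recs)]
```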
The material names set is assembled from the *MATERIAL keyword pass. The mat_props dictionary is assembled from the property keyword pass. The two are linked by the current_material variable in the parser's state machine — when a DENSITY keyword appears, it gets attached to whichever material was most recently declared.
This linkage is maintained purely through sequential state. There is no explicit cross-referencing step. The parser knows which material is active because it saw the most recent MATERIAL line, and it keeps that in state until the next one. This is why the file structure matters: if a material property keyword appears before any MATERIAL line, it has no material to attach to and is silently discarded.
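The sequential-state linkage can be sketched as below. The names attach_properties and PROP_KEYWORDS are illustrative (the real parser recognizes a much larger keyword set and also captures the data rows), but current_material and mat_props are the source's own names:

```python
import re

# Illustrative subset; the real parser recognizes many more property keywords.
PROP_KEYWORDS = ("*ELASTIC", "*DENSITY", "*EXPANSION", "*PLASTIC")

def attach_properties(lines):
    """Sketch of the sequential-state linkage: each property keyword
    attaches to whichever *MATERIAL was declared most recently."""
    mat_props = {}
    current_material = None          # the parser's sequential state
    for line in lines:
        m = re.match(r"\*MATERIAL\s*,\s*NAME=(\S+)", line, re.IGNORECASE)
        if m:
            current_material = m.group(1)
            mat_props.setdefault(current_material, [])
        elif line.upper().startswith(PROP_KEYWORDS):
            if current_material is None:
                continue             # orphan property: silently discarded
            mat_props[current_material].append(
                {"keyword": line.lstrip("*").strip()})
    return mat_props
```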
Section 2 — The Materials Tab: Selection, Details, and the Parts Map
The Materials tab is the first place most users go after processing a model. It gives you the complete material inventory: every material in the file, which sections reference each material, and which parts use each section.
The left panel is a scrollable listbox containing material names, alphabetically sorted. There is a search and filter control above it — a text entry and an Apply button. When you type a substring and click Apply, the listbox repopulates with only the materials whose names contain that substring. Clear restores the full list.
This filter is particularly useful in large models. A mobile device assembly might have fifteen or twenty material definitions: two or three grades of aluminum, a board material, several solder alloys, various polymers, adhesives, and potting compounds. Finding the one you want in a list of twenty entries requires scanning. A filter that responds to typing 'Al' by showing only the aluminum alloys cuts that scan to a few entries.
When you select a material, the right panel displays the material details. This is a scrolled text widget rendered in a monospace font. It shows the material name at the top, then for each property record: the keyword and its parameters on one line, followed by the data table with inferred column headers.
The inferred headers come from the KNOWN_PROP_HEADERS dictionary. That dictionary maps each property keyword to a list of standard column names. ELASTIC maps to E, nu, T. DENSITY maps to rho, T. EXPANSION maps to alpha, T. CONDUCTIVITY maps to k, T. SPECIFIC HEAT maps to cp, T. PLASTIC maps to yield, plastic strain, T. DAMPING maps to alpha, beta.
When the data is displayed, the infer_headers function looks up the keyword, takes the first N entries from the known header list where N is the number of columns in the data, and uses those as column labels. If the data has more columns than the known headers cover, synthetic column names are generated: c4, c5, c6, and so on.
This column inference is an educated guess based on Abaqus conventions, not a guarantee. Temperature-independent elastic data has two columns: E and nu. Temperature-dependent elastic data has three: E, nu, T. The inferred header nu reads correctly in both cases because the function takes the first N entries from the known list. But if a material has a non-standard property definition — an orthotropic elastic definition with nine constants on one line, for example — the column headers will be wrong. The program displays what it can infer, and the analyst is expected to verify against the actual model definition for anything non-standard.
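The names infer_headers and KNOWN_PROP_HEADERS come from the source, and the dictionary contents below mirror the list given earlier; the exact padding logic for synthetic columns is an assumption based on the description:

```python
# Contents mirror the mapping described in the text.
KNOWN_PROP_HEADERS = {
    "ELASTIC": ["E", "nu", "T"],
    "DENSITY": ["rho", "T"],
    "EXPANSION": ["alpha", "T"],
    "CONDUCTIVITY": ["k", "T"],
    "SPECIFIC HEAT": ["cp", "T"],
    "PLASTIC": ["yield", "plastic_strain", "T"],
    "DAMPING": ["alpha", "beta"],
}

def infer_headers(keyword, n_cols):
    """Take the first n_cols known names for this keyword; pad any
    remaining columns with synthetic names c4, c5, c6, ..."""
    known = KNOWN_PROP_HEADERS.get(keyword, [])
    headers = known[:n_cols]
    headers += [f"c{i}" for i in range(len(headers) + 1, n_cols + 1)]
    return headers
```

Note how the two-column (temperature-independent) and three-column (temperature-dependent) ELASTIC cases both read correctly, exactly as the text describes: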
The Parts Using This Material Panel
Below the material details is the Parts Using This Material panel — a listbox with extended multi-selection enabled. This shows every part that was associated with the selected material through the material-section-part mapping built during the second parse pass.
This reverse lookup is one of the most practically useful features in the Materials tab. You have a material — say, a specific aluminum alloy — and you want to know which parts in the assembly use it. Select the material, and the parts list populates immediately. Select some or all of those parts, add them to the Parts tab pre-selection, and you can view or export just the aluminum parts as a group.
The Export Parts CSV button writes the mapping — material to part names — to a comma-separated file for external analysis. The Export DOT Graph button writes a graph-description file in the DOT language format, which can be rendered by Graphviz to produce a visual diagram of the material-section-part relationships. These are the network analysis exports: they treat the model's material assignments as a graph and let you see its structure.
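A DOT export along these lines might look like the following sketch; the function name and the exact edge formatting are illustrative, not the program's:

```python
def export_dot(material_to_parts, path):
    """Sketch of a material-to-parts DOT export. Each material-part
    assignment becomes one directed edge; Graphviz renders the result
    as a diagram of the material-section-part relationships."""
    lines = ["digraph materials {"]
    for material, parts in sorted(material_to_parts.items()):
        for part in sorted(parts):
            lines.append(f'    "{material}" -> "{part}";')
    lines.append("}")
    text = "\n".join(lines)
    with open(path, "w") as f:
        f.write(text)
    return text
```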
Section 3 — The Property Viewer Tab: Dataset Navigation
The Property Viewer tab is a dedicated reader for raw material property data. Its purpose is simple: show you the exact numbers the solver will use, without any interpretation, so you can verify them yourself.
The interface has two listboxes on the left and a text display on the right. The top listbox is the material selector — the same list of material names as in the Materials tab. The bottom listbox is the dataset selector — it shows all property records for the selected material.
When you select a material, the dataset list populates with one entry per property record. Each entry shows the keyword name followed by the row count: ELASTIC (12 rows), DENSITY (1 row), PLASTIC (8 rows). Selecting an entry in the dataset list populates the right panel with the data table for that dataset, formatted with inferred column headers.
The right panel uses a scrolled text widget in a monospace font. Monospace is deliberate — it is the only font class that aligns column-formatted text reliably. A proportional font like Arial makes the numbers in adjacent rows appear to shift horizontally, which makes scanning a data table for anomalies much harder. Monospace keeps columns visually aligned.
The data is displayed exactly as parsed: floating-point numbers in their original precision from the file, with no rounding or reformatting. If the file has a value written as 2.07E+05, that is what you see. This matters because it lets you verify that the value was read correctly — not normalized, not scaled, not converted.
This is an important principle. A tool that displays processed or reformatted values hides the connection between the original file and the analysis. A tool that shows you the raw values gives you the foundation to cross-check manually. If your steel modulus shows as 207000 and you expected 200000, you know to go back to the file and investigate. If it showed as approximately 2.1 × 10⁵, the discrepancy might go unnoticed.
Section 4 — The Plotting Tab: Visualizing Material Behavior
Data tables tell you numbers. Plots tell you behavior. The Plotting tab converts material property datasets into graphs — the visual representation of how one material property changes as a function of another.
The most important use case for this tab is temperature-dependent properties. Modern simulation models — especially for drop testing and thermal-mechanical analysis — use materials whose stiffness, yield strength, and expansion coefficient all change with temperature. A material card might have twelve rows of elastic data covering temperatures from −40°C to +250°C. The only way to quickly assess whether that curve is physically reasonable is to see it.
The Plot Selection Flow
The interface has two listboxes on the left: a material selector and a dataset selector. Select a material, select a dataset, and two dropdown menus populate with the column names inferred for that dataset. One dropdown is the X axis column, the other is the Y axis column.
The dropdowns are populated dynamically — the available column choices change based on which dataset is selected. An ELASTIC dataset with three columns offers E, nu, and T as choices. A PLASTIC dataset offers yield, plastic_strain, and T. You choose which quantity to plot against which.
Click Plot X versus Y and a matplotlib figure window opens. The function extracts all data rows for the selected dataset, takes the column index for the X choice and the column index for the Y choice, converts the values to floating-point numbers, and passes them to plt.plot with circular markers and a connecting line. A grid is drawn, axes are labeled with the column names, and the title shows the material name, keyword, and axis names.
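The plotting call can be sketched roughly as below. The function name and signature are illustrative; matplotlib is the library named in the text, and the Agg backend is selected here only so the sketch runs without a display:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch; the tool opens a window
import matplotlib.pyplot as plt

def plot_dataset(material, keyword, rows, headers, x_name, y_name):
    """Sketch of the plot routine: pick two columns by header name,
    convert to float, plot with circular markers and a grid."""
    xi, yi = headers.index(x_name), headers.index(y_name)
    xs = [float(r[xi]) for r in rows]
    ys = [float(r[yi]) for r in rows]
    fig, ax = plt.subplots()
    ax.plot(xs, ys, marker="o")            # markers plus connecting line
    ax.grid(True)
    ax.set_xlabel(x_name)
    ax.set_ylabel(y_name)
    ax.set_title(f"{material} {keyword}: {y_name} vs {x_name}")
    return xs, ys, fig
```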
Auto Plot E of T
The Auto Plot E of T button is a one-click shortcut for the most common use case: temperature-dependent Young's modulus. The function searches the material's property records for the first ELASTIC dataset that has both an E column and a T column — which means temperature-dependent data. If found, it automatically assigns T to the X axis and E to the Y axis, then generates the plot without requiring you to select the columns manually.
This reflects a deliberate design choice worth noting. The common case should be frictionless. You should not have to select a material, select ELASTIC, then choose T as X and E as Y every time you want to see the stiffness curve. One click, result visible. The general mechanism — the manual X and Y selection — is there for everything else.
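The auto-plot search might be sketched like this; find_e_of_t is an illustrative name, and the real function goes on to drive the plot call itself:

```python
def find_e_of_t(records):
    """Sketch of the Auto Plot E of T search: return the first ELASTIC
    record whose inferred headers include both E and T (i.e. the data
    is temperature dependent), plus the X and Y column indices."""
    for rec in records:
        if rec["keyword"] != "ELASTIC":
            continue
        n_cols = len(rec["data"][0]) if rec["data"] else 0
        headers = ["E", "nu", "T"][:n_cols]
        if "E" in headers and "T" in headers:
            return rec, headers.index("T"), headers.index("E")
    return None, None, None
```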
What Plots Tell You That Tables Do Not
A smooth, monotonically decreasing stiffness curve with temperature is physically expected for most metals and polymers. A curve that increases steeply at high temperature, or that has a sharp discontinuity between two data points, is a red flag — either an error in the material card or a material model that the analyst needs to understand more deeply before trusting the simulation.
A yield curve — stress versus plastic strain — should be monotonically increasing. A yield curve that decreases after some strain level implies material softening, which is physically possible under strain localization but should be deliberate, not accidental.
These checks are things you cannot see in a column of numbers without careful cross-comparison. A plot makes them visible in two seconds.
The Export Dataset CSV button on the Plotting tab writes the selected dataset — with inferred headers — to a CSV file. This is for users who want to pull the data into Excel or a Python script for more detailed analysis, independent of this tool.
Section 5 — The Sections Tab: Connecting Material to Mesh Region
The Sections tab sits between Materials and Parts in the program's conceptual flow. It displays the section definitions — the explicit assignments that link a material to a set of elements through an element set name.
A section definition in Abaqus is the formal declaration that a specific group of elements — identified by an element set name — is made of a specific material and has a specific section type. For solid elements, the section type is solid. For shell elements, the section type is shell, with thickness. For membrane elements, the type is membrane, also with thickness.
Without a section definition, elements in a model have no material. They are topologically present — they exist in the connectivity table — but the solver cannot compute stresses because it does not know the constitutive relationship. A section definition is what makes an element physically meaningful.
The Sections tab shows all section definitions extracted during parsing. The left panel is a scrollable list with the same filter control as the Materials tab. The right panel displays the section details: the raw keyword line as it appeared in the file, the section type, the material name, the element set, and — for shell and membrane sections — the thickness value.
Recall from Part One that shell and membrane thickness is not on the keyword line itself. It is on the first data line that follows the section keyword. The parser captures it and stores it on the section record. The Sections tab surfaces it so you can verify it is correct without having to find the line in the raw file.
The Parts Using This Section panel below the section details mirrors the Parts Using This Material panel in the Materials tab. It shows which parts are associated with the selected section — derived from the element-set ownership tracking built during the second parse pass.
Both panels support multi-selection and have an Add to Pre-selection button. Pre-selection is the staging area in the Parts tab: you can build up a set of parts from multiple materials or sections without having to remember which parts you wanted, then act on all of them at once.
Section 6 — The Recommendation Engine: Architecture
The Recommendations tab is where the program shifts from display to diagnosis. Instead of showing you what is in the model, it tells you what might be wrong with it — or what best practices the model does or does not follow.
The recommendation system has two layers that are architecturally separate but displayed together in the same tab.
The first layer is the best practices module — an external Python module called best_practices. It is imported at program startup and called during model processing. It receives the parsed model data and returns a list of recommendation objects — structured records each containing a severity level, a category, a title, a description, a recommended action, and an optional reference.
The second layer is the edit proposals system — a set of specific, executable changes to the INP file. These are older and more focused: they check for specific conditions and propose concrete keyword modifications to address them.
Severity Levels and Categories
Best practices recommendations use a four-level severity system. CRITICAL means a condition that is highly likely to produce incorrect results — an error, not a warning. WARNING means a condition that often causes problems and should be reviewed carefully. INFO means a condition worth knowing about that may or may not be a problem depending on intent. SUGGESTION means an optional improvement that follows best practice but is not strictly necessary.
Categories group recommendations by the type of issue: ELEMENT QUALITY, CONTACT, MATERIAL, OUTPUT, PERFORMANCE, STABILITY, BOUNDARY CONDITIONS, and others. This lets you filter mentally — if you are in a hurry to run and you know your output settings are already configured, you can skip the OUTPUT recommendations and focus on STABILITY.
In the interface, each recommendation is displayed with a colored severity prefix. Critical items appear in red. Warnings appear in orange. Info items appear in blue. Suggestions appear in green. Selecting an item in the list displays the full details in the right panel — severity, category, title, full description, the recommended action to take, and the reference source if one is provided.
Section 7 — What the Recommendation Engine Checks
The following six edit proposals are directly visible in the source code and illustrate exactly how the engine reasons.
Edit Proposal E1 — C3D8R to C3D8I Element Upgrade
The C3D8R is a reduced-integration eight-node brick element. The R suffix means reduced integration: instead of using the full 2×2×2 Gauss point scheme — eight integration points — it uses only one point at the element center.
Reduced integration cuts computation time significantly. But for bending problems — scenarios where the element is subjected to bending rather than pure tension or compression — reduced integration introduces a known accuracy problem. The single-point integration cannot capture the bending strain gradient across the element thickness. The result is that bending stiffness is underestimated, strains at the surface of the part are underpredicted, and stress results near the outer fibers are too low.
For a drop test simulation, where a glass panel or a plastic housing is being bent by an impact load, underpredicting surface strain is exactly the wrong direction to err. It makes the part appear more robust than it is.
The C3D8I adds incompatible modes — extra internal degrees of freedom that capture the bending strain gradient without adding nodes. It is more expensive than C3D8R but dramatically more accurate for bending. The recommendation is to upgrade.
The proposal scans every *ELEMENT line in the file for the pattern TYPE=C3D8R. Each match is shown as a diff — the original line prefixed with a minus sign, the proposed line with C3D8I substituted prefixed with a plus sign. If you apply the proposal, the substitution is made in the in-memory line list. If you then export a modified INP, the exported file contains the upgraded element type.
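A sketch of the preview and apply pair for this proposal, assuming the regex-based mechanism described; the function names and exact diff formatting are illustrative:

```python
import re

# Match TYPE=C3D8R on element keyword lines (case-insensitive).
PATTERN = re.compile(r"(TYPE\s*=\s*)C3D8R\b", re.IGNORECASE)

def preview_c3d8i(lines):
    """Diff-style preview: minus the original *ELEMENT line, plus the
    upgraded line with C3D8I substituted."""
    diff = []
    for line in lines:
        if line.lstrip().upper().startswith("*ELEMENT") and PATTERN.search(line):
            diff.append("- " + line)
            diff.append("+ " + PATTERN.sub(r"\g<1>C3D8I", line))
    return diff

def apply_c3d8i(lines):
    """Return a NEW line list with the substitution applied; the input
    list is left untouched, matching the non-destructive design."""
    return [PATTERN.sub(r"\g<1>C3D8I", l)
            if l.lstrip().upper().startswith("*ELEMENT") else l
            for l in lines]
```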
Edit Proposal E2 — TIE Constraint Hardening
*TIE constraints are used to bond two surfaces that share no nodes. The constraint mathematically forces the nodal degrees of freedom on one surface to follow the other. This is the standard way to connect dissimilar meshes — for example, bonding a fine solder joint mesh to a coarser board mesh without requiring node coincidence.
The default behavior of *TIE includes an ADJUST parameter that is YES by default. ADJUST allows Abaqus to move slave nodes to exactly match the master surface before applying the constraint. In a well-meshed model with good surface proximity, this is fine. In a model where the surfaces are slightly offset — as happens when parts were meshed independently with no guaranteed node coincidence — ADJUST can snap nodes to unexpected positions, introducing artificial strain and over-constraining the interface.
Setting ADJUST=NO prevents this node movement. The constraint is enforced exactly at the nodes' current positions. Combined with a POSITION TOLERANCE of zero — which means only nodes within zero distance of the master surface are tied — the constraint is hardened against over-aggressive node snapping.
The proposal scans every *TIE line, checks for ADJUST and POSITION TOLERANCE parameters, and proposes adding or correcting them. The preview shows the before and after for every TIE constraint in the file.
Edit Proposal E3 — Field Output Block
A model without field output requests produces no stress, strain, displacement, or velocity results in the ODB file. This is a particularly easy mistake to make when building a model from a template that had output requests, then modifying the step type or step parameters in a way that invalidates the original requests.
The proposal checks whether the summary info dictionary shows any output field entries. If it does not, the proposal adds a standard field output block: *OUTPUT, FIELD; *ELEMENT OUTPUT requesting S, E, PE, and PEEQ — stress, strain, plastic strain, and equivalent plastic strain; and *NODE OUTPUT requesting U, V, and A — displacement, velocity, and acceleration. These are the minimum useful outputs for a drop simulation.
The insertion point is immediately after the first *STEP keyword in the file, ensuring the output block is in scope for the analysis step.
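The insertion can be sketched as follows; insert_field_output is an illustrative name, and the block contents are the keywords listed above:

```python
# The output block described in the text, as in-memory lines.
FIELD_OUTPUT_BLOCK = [
    "*OUTPUT, FIELD",
    "*ELEMENT OUTPUT",
    "S, E, PE, PEEQ",
    "*NODE OUTPUT",
    "U, V, A",
]

def insert_field_output(lines):
    """Sketch of the insertion: place the block immediately after the
    first *STEP line. Returns a new list; the input is not modified."""
    out = list(lines)
    for i, line in enumerate(out):
        if line.lstrip().upper().startswith("*STEP"):
            return out[:i + 1] + FIELD_OUTPUT_BLOCK + out[i + 1:]
    return out  # no step found; nothing to insert
```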
Edit Proposal E4 — Bulk Viscosity
Bulk viscosity is a numerical damping parameter used in explicit dynamic analyses to prevent non-physical oscillations in the shock wave that propagates through the mesh during impact. Abaqus Explicit has a default bulk viscosity, but the default is not always appropriate for all impact scenarios.
The recommended values — 0.06 for linear bulk viscosity and 1.2 for quadratic bulk viscosity — are tuned for drop simulations. The linear term damps long-wavelength oscillations. The quadratic term damps sharp-front shock waves. The specific values reflect accumulated experience in drop simulation engineering.
This proposal only activates if the model is an explicit dynamic simulation — verified by checking the simulation types list for DYNAMIC EXPLICIT. It has no relevance for implicit or static analyses and would never be proposed for them.
Edit Proposal E5 — Mass Scaling
Mass scaling artificially increases the density of elements to increase the stable time step in an explicit analysis. A larger time step means fewer increments to simulate the same event time, which means faster runs.
The problem is that mass scaling changes the inertia of the model. For a genuine dynamic event — an actual drop — the added inertia alters the dynamics. The accelerations, contact forces, and energy distribution all change. If mass scaling is large enough to significantly affect results, you are no longer simulating the original problem.
The proposal locates *MASS SCALING keywords and comments them out with double-asterisk comment markers, adding an annotation explaining why. This is a non-destructive modification — the original line is preserved as a comment and can be restored by removing the leading double asterisk.
Like E4, this proposal only activates for explicit simulations.
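The comment-out mechanism for E5 might be sketched like this; the function name and annotation wording are illustrative:

```python
def comment_out_mass_scaling(lines):
    """Sketch of the non-destructive comment-out: prefix each
    *MASS SCALING line with the ** comment marker, preceded by an
    annotation line explaining why. Returns a new list."""
    out = []
    for line in lines:
        if line.lstrip().upper().startswith("*MASS SCALING"):
            out.append("** Commented out by analyzer: mass scaling alters "
                       "inertia in a genuine dynamic event")
            out.append("** " + line)   # restorable by removing the marker
        else:
            out.append(line)
    return out
```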
Edit Proposal E6 — History Output for Energy Balance
Energy balance monitoring is one of the most important quality checks in explicit dynamic simulation. In a well-behaved simulation, the ratio of kinetic energy to internal energy should remain below five to ten percent throughout the event. If kinetic energy is large relative to internal strain energy, the event is not quasi-static — the dynamics matter and a static analysis would be inappropriate.
More critically, if total energy — ETOTAL — drifts significantly from its initial value, numerical energy is either being created or destroyed. Either condition indicates a problem: hourglass modes adding artificial energy, contact instabilities, or time step issues.
The proposal adds a history output block requesting ALLKE (total kinetic energy), ALLIE (total internal energy), ALLSE (total strain energy), ALLPD (plastic dissipation), and ETOTAL (total energy). These output to the history portion of the ODB file at every increment, making the energy trace available for review.
This is the energy audit trail. If your simulation passes the energy balance check — ETOTAL flat, kinetic energy small relative to internal — you have meaningful evidence that the result is physically reasonable.
Section 8 — The Preview and Apply Architecture
Every edit proposal is structured as a dictionary with five fields: an ID, a title, a description, a preview function, and an apply function. The preview and apply entries are Python callables — function objects stored in the record and invoked on demand.
The preview function takes no arguments. It reads the in-memory line list — initialized from the original file at parse time — and produces a diff-format text string showing what would change. Lines to be removed are prefixed with a minus sign. Lines to be added are prefixed with a plus sign. Lines that are unchanged are not shown.
This preview is generated fresh every time it is called, against the current state of the in-memory line list. If you have already applied some proposals, the preview for subsequent proposals reflects the already-modified lines.
The apply function takes the current line list and returns a new line list with the modification applied. It uses Python list comprehensions and the regular expression substitution function for pattern-based changes — for example, replacing C3D8R with C3D8I throughout all element keyword lines.
For insertions — adding output blocks or bulk viscosity — the apply function scans the line list for a target keyword, inserts the new lines immediately after, and returns the modified list. The original list is not modified in place. A new list is returned. This means you can call preview again after applying and compare.
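Putting the pieces together, one proposal record might be built like this sketch. The five-field structure comes from the text; the factory function, the get_lines callback (which lets preview always see the current in-memory state), and the naive replace are all illustrative:

```python
def make_c3d8i_proposal(get_lines):
    """Sketch of a five-field proposal record. get_lines is a callable
    returning the CURRENT in-memory line list, so a fresh preview
    reflects any proposals already applied."""
    def preview():
        return "\n".join(
            f"- {l}\n+ {l.replace('C3D8R', 'C3D8I')}"
            for l in get_lines() if "C3D8R" in l)

    def apply(lines):
        # Returns a new list; never mutates the input in place.
        return [l.replace("C3D8R", "C3D8I") for l in lines]

    return {
        "id": "E1",
        "title": "Upgrade C3D8R to C3D8I",
        "description": "Incompatible-mode bricks for bending accuracy",
        "preview": preview,
        "apply": apply,
    }
```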
The Edits tab — separate from the Recommendations tab — has checkboxes for each proposal. You can select any combination of proposals, preview all of them simultaneously as a combined diff, and then apply them to the in-memory line list. When you export a modified INP, you can choose to include any applied edits or export the original without changes.
No changes are ever made to the original file on disk. The program only modifies the in-memory copy of the lines. The original file is always preserved. This non-destructive approach is not optional — it is fundamental. A tool that silently modifies your working file would be dangerous to use in a production workflow.
Section 9 — The Best Practices Module: A Broader Lens
The edit proposals system — the six proposals described above — was designed specifically for Abaqus Explicit drop simulations. That is the original focus of this tool.
The best practices module, introduced in later versions, takes a broader view. It is a separate Python module — best_practices.py — that can be imported independently of the rest of the program. It exports a set of functions and two enumeration classes: Severity and Category.
The generate_recommendations function is the main entry point. It accepts the full parsed model data — the summary info dictionary and the material-section mapping — and returns a list of Recommendation objects. Each Recommendation is a structured data class with severity, category, title, description, action, and reference fields.
Because the best practices module is a separate file and a clean interface, it can be extended independently. Adding a new recommendation means adding a new function or a new condition check inside best_practices.py and returning an additional Recommendation object from generate_recommendations. Nothing else in the program needs to change.
This architectural separation — the recommendation logic in its own module, isolated from the interface code — is deliberate. It makes the recommendation rules testable in isolation, shareable between different tools, and updatable without risking changes to the display or parsing code.
The best practices module uses the same parsed data that everything else in the program uses. It looks at element types in the summary info dictionary and flags patterns that indicate risk. It looks at simulation types and checks whether appropriate control parameters are set. It looks at material data and flags materials that appear to be missing required properties for the detected simulation type.
The result is a diagnostics list that covers the entire model holistically, not just the narrow edit-proposal checklist.
Section 10 — Simulation Intent and Purpose-Specific Recommendations
The newest layer of the recommendation system — introduced in version 15.7 — is the simulation intent classifier. We touched on this at the end of Part One. Here we look at how it connects to the recommendation engine.
After the intent classifier scores the model and produces a top classification, the accept-reject-redirect workflow determines the path forward.
If you accept the classification — or if you override it with the correct simulation type — the function get_purpose_recommendations is called with that simulation type as input. It returns a list of recommendations calibrated specifically to that purpose.
A drop test model that is correctly classified gets recommendations about energy balance monitoring, mass scaling impact, contact stability, element type selection for impact. A modal analysis model gets recommendations about boundary condition completeness, mass verification, mode sufficiency checks, and residual vector implementation. These are completely different sets of guidance, and presenting the wrong set — drop test advice for a modal model — would add confusion rather than value.
The Gap Analysis
If you reject the classifier's result and state a different simulation type, the gap analysis activates. The generate_gap_analysis function compares the signals detected in the model against the signals expected for the stated purpose.
It produces two lists: what the model has that is consistent with the stated purpose, and what the model is missing or has wrong for that purpose. It also produces a conversion checklist: the specific keyword additions, modifications, or removals needed to make the model appropriate for the stated simulation type.
This is a diagnostic tool for model conversion. If you inherit a model built for static analysis and need to repurpose it for explicit dynamics, the gap analysis tells you exactly what is missing: no DYNAMIC EXPLICIT step, no mass scaling assessment, no bulk viscosity, no appropriate output requests. You do not need to enumerate those requirements from memory.
The evaluation report export writes the full classification result — top purpose, confidence, evidence chain, gap analysis if applicable — to a text file. In a QA workflow where you need to document the pre-run model review, this report is the artifact. It records what the program found, what decision was made, and what actions were recommended.
Section 11 — The CAE File Path: An Export Wrapper
There is one more aspect of the program's input handling worth discussing in this part: the CAE file path. While the program is built around INP files, it also accepts Abaqus CAE binary files — the native Abaqus/CAE model database format — as input.
A CAE file is not a text file. It is a binary database containing the model, the mesh, the material assignments, the step definitions, the analysis settings — everything. It cannot be parsed directly by the same text-based functions that read INP files.
The program's approach is a pragmatic wrapper. If the selected file ends in .cae, the program does not attempt to parse it directly. Instead, it calls the Abaqus command-line interface — specifically the abaqus cae command with a noGUI Python script — to export the CAE model to an INP file. That exported INP file is then processed through the same pipeline as any other INP.
The export script is embedded in the program as a multi-line Python string. It is written to a temporary file, then passed to the abaqus cae command as the script argument. The script uses the global openMdb function — not a method on the mdb object, a distinction confirmed through external AI review — to load the CAE file, iterates through the models in the loaded database, creates a job object for each model, and calls writeInput to produce the INP.
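The embedded script might look like the following sketch. This string is executed by Abaqus, not by the analyzer itself, and the exact script in the program may differ; the SUCCESS and ERROR markers are the ones the wrapper scans for afterward, and the {cae_path} placeholder stands in for however the program injects the file path.

```python
# Sketch of the embedded noGUI export script, stored as a string and
# filled in with the CAE file path before being written to a temp file.
EXPORT_SCRIPT = '''
from abaqus import openMdb
try:
    # Global openMdb function, not a method on the mdb object.
    mdb = openMdb(pathName=r"{cae_path}")
    for model_name in mdb.models.keys():
        job = mdb.Job(name=model_name, model=model_name)
        job.writeInput()   # writes <model_name>.inp
        print('SUCCESS: ' + model_name + '.inp')
except Exception as exc:
    print('ERROR: ' + str(exc))
'''
```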
The subprocess runs with a timeout of 300 seconds. The output is captured and scanned for SUCCESS and ERROR markers embedded by the script. On success, the exported INP path is returned to the main program, which then processes it normally.
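The wrapper flow — write the script to a temp file, run abaqus cae with a 300-second timeout, scan the captured output — can be sketched as below. Function names are illustrative; parse_export_output isolates the marker-scanning logic so it can be reasoned about separately from the subprocess call.

```python
import subprocess
import tempfile

def parse_export_output(output):
    """Return the exported INP path on SUCCESS, or raise on ERROR."""
    for line in output.splitlines():
        if line.startswith("SUCCESS:"):
            return line.split("SUCCESS:", 1)[1].strip()
        if line.startswith("ERROR:"):
            raise RuntimeError(line.split("ERROR:", 1)[1].strip())
    raise RuntimeError("No completion marker found in Abaqus output")

def export_cae_to_inp(script_text):
    """Write the export script to a temp file and run it through Abaqus."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as fh:
        fh.write(script_text)
        script_path = fh.name
    try:
        proc = subprocess.run(
            ["abaqus", "cae", "noGUI=" + script_path],
            capture_output=True, text=True, timeout=300)
    except FileNotFoundError:
        # Explicit failure mode: Abaqus is not on the system PATH.
        raise RuntimeError("Abaqus command not found")
    return parse_export_output(proc.stdout)
```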
This path requires Abaqus to be installed and available in the system PATH. If it is not, the error message tells you exactly why: Abaqus command not found. There is no attempt to work around the absence of Abaqus for CAE export — the dependency is explicit and the failure mode is clear.
The temporary INP file from the export is cleaned up when the program closes. A reference to the temp file path is stored on the application object so cleanup can find it.
Section 12 — Critical Thinking: What the Material System Reveals
Material data is almost always the weakest link in a simulation model.
Mesh quality can be checked geometrically. Boundary conditions can be verified by inspection. Contact definitions can be traced to physical interfaces. But material data — the numbers that represent the constitutive behavior of the physical materials in the model — comes from somewhere outside the simulation: test data, literature values, datasheet specifications, or engineering judgment.
The most common material data problem is not wrong values. It is values from the wrong context. A modulus measured at room temperature applied to a simulation running at elevated temperature. A yield strength from a datasheet that represents the minimum specification, applied to a material that is actually at the midpoint of the distribution. A density copied from a general reference for a material family, rather than from the specific alloy and temper in the model.
These problems are not detectable from the file alone. The program can tell you that a material has a modulus of 200,000 and that this value is consistent with steel in the millimeter-tonne-second unit system, where stresses and moduli are in MPa. It cannot tell you whether that specific value is appropriate for the specific grade of steel, the specific processing condition, and the specific temperature range of the simulation.
What the program does — by surfacing the raw values, providing plotting for temperature dependence, and running unit detection against the material library — is give you the information you need to make that judgment yourself. That is the correct role of an analysis tool.
The recommendation engine extends this by checking patterns. Not just values, but relationships. Does this explicit dynamic model have energy balance output? Does this model with contact pairs have friction defined? Does this model with temperature-dependent materials have initial conditions for temperature? These are the pattern checks that experienced analysts run mentally when reviewing a model. The engine makes them automatic.
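Checks of that kind reduce to conditional rules over the set of keywords found in the model. The sketch below is illustrative: the rule list and the exact keyword strings are assumptions, and the program's actual engine carries far more rules with severities, categories, and references.

```python
def run_pattern_checks(keywords):
    """Run relationship checks over a set of uppercase Abaqus keywords.

    Returns a list of (severity, message) findings. The three rules here
    mirror the examples in the text; real rules would be more numerous.
    """
    findings = []
    if "*DYNAMIC, EXPLICIT" in keywords and "*ENERGY FILE" not in keywords:
        findings.append(("WARNING",
                         "Explicit model without energy balance output"))
    if "*CONTACT PAIR" in keywords and "*FRICTION" not in keywords:
        findings.append(("WARNING",
                         "Contact pairs defined but no friction"))
    if ("*EXPANSION" in keywords and
            "*INITIAL CONDITIONS, TYPE=TEMPERATURE" not in keywords):
        findings.append(("INFO",
                         "Thermal expansion defined but no initial temperature"))
    return findings
```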
But — and this is important — the engine cannot make the final judgment. It can flag that energy balance output is missing. It cannot know whether energy balance is actually going to be an issue for this specific model at this specific mesh density. You still have to decide. The tool gives you data and flags. The engineer provides judgment.
Section 13 — The Complete Material System Pipeline, End to End
One final numbered summary, consistent with the end of each part in this series.
1. During the second parse pass, every MATERIAL keyword sets the current material name. Every subsequent property keyword — ELASTIC, DENSITY, PLASTIC, and others — creates a property record attached to that material. Data lines are parsed as floating-point rows and appended to the record. This continues until the next MATERIAL keyword or until the end of the file.
2. The material-section-part mapping is assembled: sections are linked to materials via the MATERIAL parameter, and sections are linked to parts via the part stack context and the element-set ownership dictionary.
3. After processing, the Materials tab populates from the sorted material names list. Selecting a material triggers on_select_material, which retrieves the material's section list from the mapping, formats it with inferred column headers, and displays it in the details panel. The parts list populates from the mapping's associated parts.
4. The Property Viewer tab's dataset list populates from the mat_props dictionary for the selected material. Selecting a dataset retrieves its rows, infers column headers, and displays the formatted data table.
5. The Plotting tab's X and Y column dropdowns populate from the inferred headers of the selected dataset. Plot X versus Y extracts the two column arrays, calls plt.plot, and opens a matplotlib figure. Auto Plot E of T searches for ELASTIC data with a temperature column and plots modulus versus temperature automatically.
6. The Recommendations tab populates from two sources: the generate_recommendations call that ran during processing, and the propose_edits call that built the edit proposal list. Best practices recommendations are listed first with their severity icons. Edit proposals follow.
7. Selecting a recommendation shows full details — severity in color, category, title, description, recommended action, reference. Selecting an edit proposal shows its preview diff.
8. In the Edits tab, proposals can be selected by checkbox, previewed as a combined diff, and applied to the in-memory line list. Export then writes the modified line list to a new INP file.
9. After intent classification, get_purpose_recommendations adds purpose-specific guidance. If classification is rejected, generate_gap_analysis produces the conversion checklist.
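Step 1 of this pipeline can be sketched as a parse loop over the file's lines. This is a simplified illustration of the record structure described in this series — a dictionary keyed by material name, holding property records with keyword, parameters, and float data rows — not the program's actual parser, which handles more keyword variants and edge cases.

```python
# Illustrative subset of the property keywords the parser recognizes.
PROPERTY_KEYWORDS = {"ELASTIC", "DENSITY", "EXPANSION", "CONDUCTIVITY",
                     "SPECIFIC HEAT", "PLASTIC", "DAMPING"}

def parse_materials(lines):
    """Second-pass sketch: build {material_name: [property records]}."""
    mat_props = {}
    current_mat, record = None, None
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith("**"):     # blank line or comment
            continue
        if line.startswith("*"):
            parts = [p.strip() for p in line[1:].split(",")]
            keyword = parts[0].upper()
            if keyword == "MATERIAL":
                # *MATERIAL, NAME=... sets the current material context.
                names = [p.split("=", 1)[1] for p in parts[1:]
                         if p.upper().startswith("NAME=")]
                current_mat = names[0] if names else None
                record = None
            elif keyword in PROPERTY_KEYWORDS and current_mat:
                # Start a property record attached to the current material.
                record = {"keyword": keyword, "params": parts[1:], "data": []}
                mat_props.setdefault(current_mat, []).append(record)
            else:
                record = None   # an unrelated keyword ends the data block
        elif record is not None:
            # Data line: parse comma-separated floats, ignoring trailing commas.
            record["data"].append(
                [float(v) for v in line.split(",") if v.strip()])
    return mat_props
```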
That is the complete material and recommendation pipeline — from a text table of numbers in a file to an actionable, auditable, exportable analysis of the model's material definitions and best-practice compliance.
Closing — The Chain from File to Judgment
We have now covered three complete pipelines across these first three parts.
Model processing: from raw text to structured assembly with parts, nodes, elements, volumes, and relationships.
Geometry: from connectivity tables to exterior surfaces, triangulated meshes, rendered views, and STL files.
Material and recommendations: from property data rows to auditable tables, behavior plots, and diagnostics.
The chain running through all three is consistent. The program reads what is in the file, structures it, makes it visible, and surfaces what it can assess. It does not make decisions for you. It does not hide its methods. Every piece of data displayed can be traced back to a specific line in a specific file. Every recommendation references a specific pattern it found in the parsed data.
That traceability is the goal. Not a tool that produces answers, but a tool that produces evidence — evidence you can evaluate, challenge, and build your own engineering judgment on top of.
Part Four of this series will cover the INP export system — how the program writes modified assemblies back to disk, how it handles the structural requirements of a valid Abaqus input file, and how the sub-assembly export produces clean, self-contained INP files from selected parts of a larger model.
The source code, documentation, and companion readers for this series are at McFaddenCAE.com.
End of Part 3 — Material Properties, Plotting, and the Recommendation System
Next: Part 4 — INP Export, Sub-Assembly Extraction, and the Output Pipeline
© 2026 Joseph P. McFadden Sr. All rights reserved. | McFaddenCAE.com